
Emergency Medicine Journal

BMJ

Preprints posted in the last 7 days, ranked by how well they match Emergency Medicine Journal's content profile, based on 20 papers previously published here. The average preprint has a 0.05% match score for this journal, so anything above that is already an above-average fit.

1
Design and preliminary safety validation of a hybrid deterministic-AI triage system for multilingual primary healthcare: a WhatsApp-based vignette study in South Africa

Nkosi-Mjadu, B. E.

2026-04-22 health informatics 10.64898/2026.04.21.26349781 medRxiv
Top 0.1%
7.0%

Background: South Africa's public healthcare system serves most of the population through approximately 3,900 primary healthcare clinics characterised by long waiting times and high volumes of repeat-prescription visits. No published pre-arrival digital triage system operates across all 11 official South African languages while aligning with the South African Triage Scale (SATS). This paper reports the design and preliminary safety validation of BIZUSIZO, a hybrid deterministic-AI WhatsApp triage system. Methods: BIZUSIZO delivers SATS-aligned triage via WhatsApp, combining AI-assisted free-text classification (Claude Haiku 4.5) with a Deterministic Clinical Safety Layer (DCSL) that overrides AI output for 53 clinical discriminator categories (14 RED, 19 ORANGE, 20 YELLOW) coded in all 11 official languages and independent of AI availability. A five-domain risk factor assessment can only upgrade the triage level. One hundred and twenty clinical vignettes in the patient's language (English, isiZulu, isiXhosa, Afrikaans; 30 per language) were scored against a developer-assigned gold standard with independent blinded nurse review. A 121-vignette multilingual DCSL safety consistency check across all 11 languages and a 220-call post-hoc framing sensitivity evaluation (110 paired vignettes) were also conducted. Results: Under-triage was 3.3% (4/120; 95% CI: 0.9%-8.3%) with no RED under-triage; exact concordance was 80.0% (96/120) and quadratic weighted kappa 0.891 (95% CI: 0.827-0.932). One two-level under-triage was observed on a non-RED presentation (V072, isiXhosa burns vignette, ORANGE→GREEN); one two-level over-triage was observed (V054, isiZulu deep laceration, YELLOW→RED). In the framing sensitivity evaluation, AI-only classification achieved 50.9% RED invariance under adversarial framing; full-pipeline classification achieved 95.0% in four validated languages, with the DCSL rescuing 18 of 23 AI drift cases. 
Conclusions: A hybrid deterministic-AI triage system with DCSL-based emergency detection achieved zero RED under-triage and consistent RED detection across all 11 official languages. The 16.7% over-triage rate falls within published South African SATS ranges (13.1-49%). A single two-level under-triage event was observed on an isiXhosa burns vignette (ORANGE→GREEN) and is discussed in Limitations. Findings are preliminary; prospective validation against independent nurse triage is the necessary next step.
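The exact binomial interval reported for under-triage (4/120; 0.9%-8.3%) follows the standard Clopper-Pearson construction, which can be reproduced with a short stdlib-only sketch. This is an illustration of the method, not code from the preprint:

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def bisect(f, lo=0.0, hi=1.0, iters=60):
    """Root of an increasing function f on [lo, hi] via bisection."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def clopper_pearson(k, n, alpha=0.05):
    """Exact (Clopper-Pearson) two-sided CI for k events in n trials."""
    # lower limit: p at which P(X >= k | p) = alpha/2 (increasing in p)
    lower = 0.0 if k == 0 else bisect(lambda p: (1 - binom_cdf(k - 1, n, p)) - alpha / 2)
    # upper limit: p at which P(X <= k | p) = alpha/2 (decreasing in p)
    upper = 1.0 if k == n else bisect(lambda p: alpha / 2 - binom_cdf(k, n, p))
    return lower, upper

# Under-triage: 4 of 120 vignettes
lo, hi = clopper_pearson(4, 120)
print(f"{lo:.1%}-{hi:.1%}")  # 0.9%-8.3%, matching the reported interval
```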

2
Improving Care by FAster risk-STratification through use of high sensitivity point-of-care troponin in patients presenting with possible acute coronary syndrome in the EmeRgency department (ICare-FASTER): a stepped-wedge cluster randomized trial

Than, M.; Pickering, J. W.; Joyce, L. R.; Buchan, V. A.; Florkowski, C. M.; Mills, N. L.; Hamill, L.; Prystowsky, J.; Harger, S.; Reed, M.; Bayless, J.; Feberwee, A.; Attenburrow, T.; Norman, T.; Welfare, O.; Heiden, T.; Kavsak, P.; Jaffe, A. S.; Apple, F.; Peacock, W. F.; Cullen, L.; Aldous, S.; Richards, A. M.; Lacey, C.; Troughton, R.; Frampton, C.; Body, R.; Mueller, C.; Lord, S. J.; George, P. M.; Devlin, G.

2026-04-23 cardiovascular medicine 10.64898/2026.04.21.26351433 medRxiv
Top 0.2%
3.7%

BACKGROUND Point-of-care (POC) high-sensitivity cardiac troponin (hs-cTn) testing has the potential to expedite decision-making and reduce emergency department (ED) length of stay for patients presenting with possible myocardial infarction (MI) by ensuring that results are consistently available when looked for by clinicians. We assessed the real-life effectiveness and safety of implementing POC hs-cTn testing in the ED. METHODS We conducted a pragmatic, stepped-wedge cluster randomized trial. The control arm was usual care with an accelerated diagnostic pathway utilizing a single-sample rule-out step with a central laboratory hs-cTn assay. The intervention arm used the same pathway with a POC hs-cTnI assay. The primary effectiveness outcome was ED length of stay assessed using a generalized linear mixed model, and the safety outcome was 30-day MI or cardiac death. RESULTS Six sites participated with 59,980 ED presentations (44,747 individuals, 61±19 years, 49.5% female) from February 2023 to January 2025, of which 31,392 presentations occurred during the intervention arm. After adjustment for covariates associated with length of stay, the intervention reduced length of stay by 13% (95% confidence interval [CI], 9 to 16%; P<0.001), corresponding to a reduction of 47 minutes (95% CI, 33 to 61 minutes) from a mean length of stay in the control arm of 376 minutes. The 30-day MI or cardiac death rate was similar in the control and intervention arms (0.39% and 0.39% respectively, P=0.54). CONCLUSIONS Implementation of whole-blood hs-cTnI testing at the POC into an accelerated diagnostic pathway was safe and reduced length of stay in the ED compared with laboratory testing.

3
Most Instability Phases Resolve: Empirical Evidence for Trajectory Plasticity in Multimorbidity Care from Longitudinal Relational Monitoring

Martin, C. M.; Henderson, I.; Campbell, D.; Stockman, K.

2026-04-24 health informatics 10.64898/2026.04.22.26351537 medRxiv
Top 0.2%
3.2%

Background: The instability-plasticity framework proposes that multimorbidity trajectories periodically enter instability phases that are vulnerable to escalation but also potentially modifiable through relational intervention. Whether such phases commonly resolve without acute care, or predominantly progress to hospitalisation, has not been quantified at scale. Objective: To quantify instability window outcomes across a longitudinal monitoring cohort; to test whether the characteristics distinguishing admitted from resolved windows reflect within-patient trajectory dynamics or between-patient severity; and to characterise which patient-reported and operator-rated signals reliably precede admission, using both a curated pilot sub-cohort and the full monitoring cohort with an explicit cross-cohort comparison. Methods: Two complementary analyses were conducted on data from the MonashWatch Patient Journey Record (PaJR) relational telehealth system. Instability windows were identified algorithmically (>=2 consecutive calls with Total_Alerts >=3) across the full longitudinal dataset (16,383 calls, 244 patients, 2.5 years) and classified by linkage to ED and hospital admission data. Window characteristics were compared at window, patient, and paired within-patient levels. Pre-admission signal cascades were analysed in two configurations: a curated pilot sub-cohort (64 patients, 280 calls, +/-10-day window, 103 admissions, December 2016-September 2017) and the full monitoring cohort (175 patients, 1,180 pre-admission calls, +/-14-day window, December 2016-July 2019). A three-way cross-cohort comparison decomposed differences between the two configurations into pipeline and population effects. Results: 621 instability windows were identified across 157 patients (64% of the monitored cohort). 67.3% resolved without hospital admission or ED attendance, a rate stable across alert thresholds 1-5. 
In paired within-patient analysis (n = 70), duration in days (p = 0.002) and multi-domain breadth (p < 0.001) distinguished admitted from resolved windows; alert intensity did not. In the pilot sub-cohort, patient-reported illness prognosis (Q21) was the dominant pre-admission signal (GEE beta = +0.058, AUC = 0.647, p-BH = 0.018). This finding did not replicate in the full cohort: Q21 was non-significant (GEE beta = -0.008, p = 0.154, AUC = 0.507). Cross-cohort analysis identified selective curation of the pilot sub-cohort as the primary explanation. In the full cohort, six signals escalated significantly before admission after Benjamini-Hochberg correction: total alerts, health impairment (Q26), red alerts, self-rated health (Q3), patient concerns (Q1), and operator concern (Q34). Health impairment achieved the highest individual AUC (0.605) and showed the longest pre-admission lead. No individual signal exceeded AUC 0.61. Conclusions: Two-thirds of instability phases resolve without hospitalisation, providing direct empirical support for trajectory plasticity as a clinically frequent phenomenon. Within the same patient, persistence - in duration and in the consistency of high-severity multi-domain flagging across calls - distinguishes trajectories that tip into admission from those that resolve. The Q21 signal reversal between cohorts illustrates how selective curation can produce compelling but non-replicable findings in monitoring research. In the full population, objective alert signals and operator judgement, rather than patient illness prognosis, carry the pre-admission signal.
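The algorithmic window definition (>=2 consecutive calls with Total_Alerts >=3) is simple to state; the function below is a hypothetical illustration of that rule, not code from the PaJR system:

```python
def find_instability_windows(total_alerts, min_len=2, threshold=3):
    """Return (start, end) index pairs (end exclusive) of runs of at least
    min_len consecutive calls whose Total_Alerts score meets the threshold."""
    windows, run_start = [], None
    for i, alerts in enumerate(total_alerts):
        if alerts >= threshold:
            if run_start is None:
                run_start = i  # a qualifying run begins here
        else:
            if run_start is not None and i - run_start >= min_len:
                windows.append((run_start, i))  # run long enough to count
            run_start = None
    # close out a run that extends to the end of the call sequence
    if run_start is not None and len(total_alerts) - run_start >= min_len:
        windows.append((run_start, len(total_alerts)))
    return windows

calls = [1, 3, 4, 0, 5, 2, 3, 3, 3]
print(find_instability_windows(calls))  # [(1, 3), (6, 9)]
```

Each window would then be classified by linkage to ED/admission data, as in the study.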

4
Effect of NHS surgical hubs on elective primary hip-and-knee replacement volume, length of stay and waiting times: national longitudinal difference-in-differences study

Wen, J.; Anteneh, Z.; Castelli, A.; Street, A.; Gutacker, N.; Scantlebury, A.; Glerum-Brooks, K.; Davies, S.; Bloor, K.; Rangan, A.; Castro Avila, A.; Lampard, P.; Adamson, J.; Sivey, P.

2026-04-22 health policy 10.64898/2026.04.21.26351383 medRxiv
Top 0.3%
1.4%

Objectives: To evaluate the effect of surgical hubs on the volume of surgeries, patient waiting times, and length of hospital stay for elective hip and knee replacements in the English NHS. Design: A retrospective longitudinal study using a difference-in-differences approach to compare changes in outcomes at NHS trusts that opened surgical hubs with those that did not. Setting: The study was set in the English NHS, using administrative data from NHS acute trusts providing elective hip and knee replacements between April 2014 and September 2024. Participants: The study included 76 NHS trusts. The treatment group consisted of 29 trusts that opened a surgical hub for trauma and orthopaedic surgery during the study period. The control group consisted of 47 trusts that did not. Forty-eight trusts that performed fewer than 1,000 relevant procedures over the ten-year period or that reported data for fewer than 41 of the 42 quarters in the sample period were excluded. Intervention: The phased introduction of surgical hubs dedicated to elective procedures at 29 NHS trusts between Q1 2020 and Q3 2024. Main outcome measures: The three main outcomes, measured at the trust-quarter level, were: the total number of elective primary hip and knee replacements (surgical volume), the average length of stay in hospital, and the average waiting time from being added to the waiting list to hospital admission. Results: The opening of a surgical hub was associated with an increase of 43.75 hip and knee replacement surgeries per quarter (95% CI: 22.22 to 65.28), which represents a 19.1% increase compared to the pre-hub mean. Length of stay was reduced by 0.32 days (95% CI: -0.48 to -0.16), a 7.8% reduction. There was no statistically significant effect on average waiting times (-14.96 days, 95% CI: -33.11 to 3.19). Conclusions: Surgical hubs appear to be effective at increasing the number of hip and knee replacements and reducing the time patients spend in hospital. 
However, in this study, they did not lead to a statistically significant reduction in waiting times overall.
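In its simplest two-group, two-period form, difference-in-differences is just the treated group's change minus the control group's change; the study's staggered, covariate-adjusted design is more involved, but the core estimator can be sketched as follows (the volumes below are made-up numbers, not the study's data):

```python
def did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Two-group, two-period difference-in-differences estimate:
    the treated group's pre-to-post change minus the control group's,
    which nets out shared time trends under the parallel-trends assumption."""
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

# Hypothetical mean quarterly volumes: hub trusts go from 229 to 280
# surgeries per quarter, non-hub trusts from 225 to 232.
print(did_estimate(229, 280, 225, 232))  # 44
```

The identifying assumption is that, absent the hub, treated trusts would have followed the same trend as controls.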

5
Comparing prognostic performance and reasoning between large language models and physicians

Gjertsen, M.; Yoon, W.; Afshar, M.; Temte, B.; Leding, B.; Halliday, S.; Bradley, K.; Kim, J.; Mitchell, J.; Sanders, A. K.; Croxford, E. L.; Caskey, J.; Churpek, M. M.; Mayampurath, A.; Gao, Y.; Miller, T.; Kruser, J. M.

2026-04-25 intensive care and critical care medicine 10.64898/2026.04.17.26350898 medRxiv
Top 0.4%
0.9%

Importance: Physicians routinely prognosticate to guide care delivery and shared decision making, particularly when caring for patients with critical illnesses. Yet these physician estimates are prone to inaccuracy and uncertainty. Artificial intelligence, including large language models (LLMs), shows promise in supporting or improving this prognostication. However, the performance of contemporary LLMs in prognosticating for the heterogeneous population of critically ill patients remains poorly understood. Objective: To characterize and compare the performance of LLMs and physicians when predicting 6-month mortality for hospitalized adults who survived critical illness. Design: Embedded mixed methods study with elicitation and comparison of prognostic estimates and reasoning from LLMs and practicing physicians. Setting: The publicly available, deidentified Medical Information Mart for Intensive Care (MIMIC)-IV v2.2 dataset. Participants: We randomly selected 100 hospitalizations of adult survivors of critical illness. Four contemporary LLMs (OpenAI GPT-4o, o3-mini, and o4-mini, plus DeepSeek-R1) and 7 physicians provided independent prognostic estimates for each case (1,100 total estimates; 400 LLM and 700 physician). Main outcomes and measures: For each case, LLMs and physicians used the hospital discharge summary and demographics to predict 6-month mortality (yes/no) and provide their reasoning (free text). We assessed prognostic performance using accuracy, sensitivity, and specificity, and used inductive, qualitative content analysis to characterize reasoning. Results: Mean physician accuracy for predicting mortality was 70.1% (95% CI 63.7-76.4%), with sensitivity of 59.7% (95% CI 50.6-68.8%) and specificity of 80.6% (95% CI 71.7-88.2%). The top-performing LLM (OpenAI o4-mini) accuracy was 78.0% (95% CI 70.0-86.0%), with sensitivity of 80.0% (95% CI 67.4-90.2%) and specificity of 76.0% (95% CI 63.3-88.0%). 
The difference between mean physician and top-performing LLM accuracy was not statistically significant (p = 0.5). Qualitative analysis revealed similar patterns in the reasoning expressed by LLMs and physicians, except that physicians regularly and explicitly reported uncertainty while LLMs did not. Conclusion and Relevance: In this study, LLMs and physicians achieved comparable, moderate performance in predicting 6-month mortality after critical illness, with similar patterns in expressed reasoning. Our findings suggest LLMs could be used to support prognostication in clinical practice but also raise safety concerns due to the lack of LLM uncertainty expression.

6
Evolving concerns about the COVID-19 pandemic: A content analysis of free-text reports from the UK COVID-19 Public Experiences (COPE) study cohort over a two-year period

Phillips, R.; Wood, F.; Torrens-Burton, A.; Glennan, C.; Sellars, P.; Lowe, S.; Caffoor, A.; Hallingberg, B.; Gillespie, D.; Shepherd, V.; Poortinga, W.; Wahl-Jorgensen, K.; Williams, D.

2026-04-19 public and global health 10.64898/2026.04.16.26351013 medRxiv
Top 0.5%
0.9%

Objectives Concerns about COVID-19 were a key driver of infection-prevention behaviour during the pandemic. The aim of this study was to gain an in-depth longitudinal understanding of the type and frequency of concerns experienced throughout the first two years of the COVID-19 pandemic. Design Content analysis of qualitative descriptions provided in a prospective longitudinal online survey as part of the COVID-19 UK Public Experiences (COPE) Study. Method At baseline (March/April 2020), when the UK entered its first national lockdown, 11,113 adults completed the COPE survey. Follow-up surveys were conducted at 3, 12, 18 and 24 months. Participants were recruited via the HealthWise Wales research registry and social media. Baseline surveys collected demographic and health data, and all waves included an open-ended question about COVID-19 concerns. Content analysis was used to identify the type and frequency of concerns at each time point. Results A total of 41,564 open-text responses were coded into six categories: personal harm (n=16,353), harm to others (n=11,464), social/economic impact (n=6,433), preventing transmission (n=4,843), government/media (n=1,048), and general concerns (n=1,423). The proportion of respondents reporting any concern declined from 75.3% at baseline to 65.8% at 24 months. Over time, concerns about personal harm increased (baseline 41.8% vs. 24-months 52.7%) whereas concerns about harm to others decreased (baseline 48.5% vs. 24-months 28.6%). Concerns about harm were also expressed in relation to clinical vulnerability, lack of trust in government/media, and perceived lack of adherence by others. These were balanced against concerns about wider social and economic impacts of restrictions. Conclusions Public concerns about COVID-19 evolved substantially over the first two years of the pandemic, reflecting changing perceptions of risk and responsibility. 
Monitoring concerns longitudinally is vital to help guide effective communication and behavioural interventions during future pandemics.

7
Group A Streptococcus Molecular Point of Care testing in a Paediatric Emergency Department

Mills, E. A.; Bingham, R.; Nijman, R. G.; Sriskandan, S.

2026-04-22 infectious diseases 10.64898/2026.04.20.26351279 medRxiv
Top 0.6%
0.8%

Background: An upsurge in Streptococcus pyogenes infections in 2022-2023 highlighted potential benefits of point-of-care tests (POCT) to support clinical pathways, prevent outbreaks, and optimise antibiotic use. Objectives: We conducted a pilot research study in a west London paediatric emergency department (ED) to determine whether a molecular POCT had potential to alter management in children who were also having a conventional throat swab taken for culture. Methods: Children <16 years presenting to the ED who had a throat swab requested by a clinician were invited to have a second swab taken for research purposes only. Clinical management was unaffected by the research swab result, which was processed using a molecular POCT that was not approved for use in the host NHS Trust. Results: Prevalence of streptococcal infection was low during the study (May 2023-June 2025); swab positivity in symptomatic children was 12.8% (6/47). Overall, 38/49 (77.6%) participants who had throat swabs received antibiotics. Of those children recommended to receive antibiotics, 29/38 (76.3%) had a negative POCT. Mean time to reporting of positive throat swab culture results was 3.67 days (range 3-5 days), leading to occasional delays in treatment, although the POCT identified positive results within minutes. Conclusion: Antibiotic use was frequent and could be avoided or stopped by use of a rule-out POCT in over three-quarters of children in the ED, if suspicion of S. pyogenes is the main driver for prescribing. The POCT was easy to process and produced immediate results compared with culture, in theory enabling timely decision-making and avoiding treatment delay.

8
Assessing the efficacy of behaviourally informed invitation messaging in increasing attendance at the NHS Targeted Lung Health Check: A randomised experimental study

Tan, X.; Danka, M. N.; Urbanski, S.; Kitsawat, P.; McElvaney, T. J.; Jundi, S.; Porter, L.; Gericke, C.

2026-04-24 public and global health 10.64898/2026.04.12.26350693 medRxiv
Top 0.6%
0.7%

Background: Lung cancer screening can reduce lung cancer mortality through early detection, but uptake of the NHS Targeted Lung Health Check (TLHC) programme remains low. Behaviourally informed invitation messages have been proposed as a low-cost approach to increase attendance, but evidence of their effectiveness in lung cancer screening is mixed. Few intervention studies have used evidence-based behaviour change frameworks, and invitation strategies have rarely been tailored to empirically identified barriers and enablers. Methods: In an online experiment, 3,274 adults aged 55-74 years with a history of smoking were randomised to see one of four behaviourally informed invitation messages or a control message. Participants then rated their intention to attend a TLHC appointment, and selected barriers and enablers to attending from a pre-defined list, which were classified according to the Theoretical Domains Framework. Invitation messages were mapped to Behaviour Change Techniques using the Theory and Techniques Tool. Message conditions were compared on intention to attend TLHC using bootstrapped ANOVA followed by pairwise comparisons. Exploratory counterfactual mediation analyses examined the role of fear in intention to attend. Results: Behaviourally informed invitation messages did not meaningfully increase intention to attend TLHC compared with the control message. While a GP-endorsed message showed a small potential benefit relative to the other conditions, this finding was not robust after adjustment for multiple comparisons. Participants most frequently reported barriers related to Emotion (particularly fear), Social Influence, and Knowledge, while Beliefs about Consequences emerged as the primary enabler of attendance. Only around half of reported barriers and enablers were addressed by the invitation messages. 
Exploratory analyses found that fear was associated with lower intention to attend a TLHC appointment, yet none of the behaviourally informed messages appeared to reduce fear compared to the control message. Conclusions: Improving lung cancer screening uptake will likely require invitation messages that directly address emotional concerns, particularly fear, alongside credible recommendations. These findings highlight the importance of systematically aligning invitation message content with empirically identified behavioural influences when designing scalable interventions to improve lung cancer screening uptake.

9
A rights-based intervention integrating social work and ophthalmic care for people experiencing or at risk of homelessness

Hassani, A.; Pecar, K.; Soliman, M.; Bunyon, P.; Ellinger, C.; Tulysewskid, G.; Croft, J.; Carillo, C.; Wewegama, G.; du Plessis-Schneider, S.; Estevez, J. J.

2026-04-24 public and global health 10.64898/2026.04.22.26351525 medRxiv
Top 0.7%
0.7%

Background Individuals experiencing or at risk of homelessness face substantial barriers to preventive eye care that are poorly addressed by standard service models. Interdisciplinary optometry-social work collaboration offers a rights-based approach to improving engagement and continuity of care. Methods A convergent mixed-methods study was conducted between February and August 2024 at a multidisciplinary community centre. Clients experiencing or at risk of homelessness received integrated optometry and social work assessment and were prioritised as high, medium, or low based on combined clinical and social risk. Social work follow-up was guided by the Triple Mandate and W-Questions framework. Quantitative data were summarised using mean (SD), median [IQR], or n (%). Qualitative case notes were analysed using content analysis with inductive coding and secondary review for consistency. Results A total of 165 clients had priority categories coded (high: 68; medium: 47; low: 154). Demographic data were available for 132 clients (60% male; mean age 49.5 years [SD 16]); 27% had not completed high school, 89% reported weekly income below AUD 1000, and 28% had vision impairment. Two hundred forty-five case-note entries were consolidated into 146 unique records. SMS (46%) and phone calls (38%) were the most documented contact methods, although only 21% of calls were answered; missed calls (13%) and disconnected numbers (7%) were common. Multi-modal contact was more frequently documented for higher-priority clients. Appointment assistance was the most recorded facilitator (71%), while rights-based supports, including interpreter and transport assistance, were infrequently documented (<=5%). Qualitative analysis identified unstable communication, reliance on informal supports, and service fragmentation as key influences on recall outcomes. 
Conclusion This study supports an interdisciplinary, rights-based optometry-social work model to address barriers to preventive eye care among people experiencing or at risk of homelessness. Embedding structured handovers and tiered recall processes within community-based services may strengthen continuity and accountability for high-priority clients. Future implementation should evaluate outcomes related to equity of reach, service integration, and sustained engagement in care.

10
Decision-making in patients with ALS: experiences and implications for decision support

Nagase, M.; Hino, K.; Sakamoto, A.; Seo, M.

2026-04-24 nursing 10.64898/2026.04.22.26351518 medRxiv
Top 0.7%
0.5%

Patients with amyotrophic lateral sclerosis (ALS) face critical decisions regarding life-sustaining treatments, such as invasive mechanical ventilation and percutaneous endoscopic gastrostomy. Advance care planning and shared decision-making are standard supportive frameworks, but they often fail to account for structural pressures that may influence decision-making, such as progressive decline, shifting patient values, and fear of becoming a burden. This study explores how patients with ALS interpret ventilator and care options amid progressive physical decline, thereby reconsidering approaches to decision support. Using a qualitative descriptive design, the researcher (a nurse/sociologist) conducted 2-3-hour home interviews with five purposively sampled patients with ALS. Data, including eye-tracking-aided responses, were analysed via Sandelowski's framework. Rigour was ensured through team-based triangulation, independent coding by two researchers, and a reflexive audit trail. Subjective narratives were prioritised without medical record cross-referencing to capture patients' experiences. Four categories emerged: (1) Rewriting clinical prognosis into a narrative of exploration via peer models, where meeting active ventilator users transformed future perceptions; (2) The conflict between securing care infrastructure and the burden on family, which greatly influenced the will to survive; (3) Existential fluctuation, where patients' intentions shifted with daily fulfilment and family events; and (4) Governance of the body via pre-emptive technology use and training carers as physical extensions. Findings showed decision-making was a multi-layered process redefining life's meaning within social resources. This necessitates shifting from independent to relational autonomy, where agency relies on care infrastructure, not physical ability. Treatment choice is a dynamic exploration requiring narrative companions to support existential fluctuations. 
Professionals must coordinate environments to reduce patient indebtedness. Limitations include the small, resource-advantaged sample (N = 5) and reliance on subjective narratives without medical record verification. Living with ALS means governing a new self through relational support and continuous dialogue.

11
Identifying clinician perceived priorities for a real-time wearable system for in-hospital monitoring: findings and evolutions following the COVID-19 pandemic

Vollam, S.; Roman, C.; King, E.; Tarassenko, L.

2026-04-24 health systems and quality improvement 10.64898/2026.04.21.26350610 medRxiv
Top 0.8%
0.5%

A Wearable Monitoring System (WMS), comprising a chest patch, wrist-worn pulse oximeter, and arm-worn blood pressure device, was developed in preparation for a pilot Randomised Controlled Trial (RCT) on a UK surgical ward. The system was designed to support continuous physiological monitoring and early detection of deterioration. An initial prototype user interface was developed by the research team based on prior clinical experience and engineering knowledge. To ensure suitability for clinical practice, iterative user-centred refinement was undertaken through a series of clinician focus groups and wearability assessments. Six focus groups were conducted between November 2019 and May 2021 involving multidisciplinary healthcare professionals. Feedback from these sessions informed successive interface and system modifications. System development spanned the COVID-19 pandemic, during which the WMS was rapidly adapted and deployed to support clinical care on isolation wards. Feedback obtained during this period was incorporated into later versions of the system and provided a unique opportunity to examine changes in clinician priorities under pandemic conditions. Clinicians consistently prioritised alert visibility, alarm fatigue mitigation, parameter flexibility, and centralised monitoring. Notably, preferences regarding alert modality and access mechanisms evolved over time: early enthusiasm for mobile or smartphone-type devices shifted towards a preference for fixed, ward-based displays and audible alerts at the nurses' station following pandemic deployment. Building on previous wearability testing in healthy volunteers, wearability testing using a validated questionnaire was completed by 169 patient participants during the RCT. The chest patch and pulse oximeter demonstrated high tolerability, whereas the blood pressure cuff showed poor wearability and was removed from the final system. 
These findings demonstrate the importance of iterative, clinician-led design for wearable monitoring systems and highlight how extreme clinical contexts such as the COVID-19 pandemic can significantly reshape perceived requirements for safety-critical monitoring technologies.

12
Individualized Forecasting of Headache Attack Risk Using a Continuously Updating Model

Houle, T. T.; Lebowitz, A.; Chtay, I.; Patel, T.; McGeary, D. D.; Turner, D. P.

2026-04-22 neurology 10.64898/2026.04.20.26350119 medRxiv
Top 1%
0.3%

Importance: Migraine attacks often occur unpredictably, limiting the ability of individuals to initiate timely preventive or preemptive treatment. Short-term probabilistic forecasting of migraine risk could enable more targeted management strategies. Objective: To externally validate the previously developed Headache Prediction Model (HAPRED-I), evaluate an updated continuously learning model (HAPRED-II), and assess the feasibility and short-term safety of delivering individualized probabilistic migraine forecasts directly to patients. Design, Setting, and Participants: Prospective 8-week cohort study conducted remotely at two academic medical centers in the United States (Massachusetts General Hospital and Wake Forest Health Sciences) between 2015 and 2019. Adults with recurrent migraine or tension-type headache completed twice-daily electronic diaries. A total of 230 participants contributed 23,335 diary entries across 11,862 participant-days of observation. Main Outcomes and Measures: Occurrence of a headache attack within 24 hours following each evening diary entry. Model performance was evaluated using discrimination (area under the receiver operating characteristic curve [AUC]) and calibration. Results: External validation of HAPRED-I demonstrated modest discrimination (AUC, 0.59; 95% CI, 0.57-0.61) and poor calibration, with predicted probabilities consistently exceeding observed headache risk. In contrast, the continuously updating HAPRED-II model demonstrated progressive improvement in predictive performance as participant-specific data accumulated. Discrimination increased from an AUC of 0.59 (95% CI, 0.57-0.61) during the first 14 days to 0.66 (95% CI, 0.63-0.70) after the first month, accompanied by improved calibration across predicted risk levels. Over the study period, 6,999 individualized forecasts were delivered directly to participants. 
No evidence suggested that receipt of forecasts was associated with increasing headache frequency or worsening predicted headache risk trajectories. Conclusions and RelevanceA static migraine forecasting model demonstrated limited transportability to new individuals. In contrast, models that continuously update within individuals may improve predictive accuracy over time and enable real-time delivery of personalized migraine risk forecasts. Further work incorporating richer physiologic and contextual predictors will likely be necessary before such systems can reliably guide clinical treatment decisions.
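The evaluation above rests on two standard metrics: discrimination (AUC) and calibration. A minimal sketch of both, using the rank-based Mann-Whitney formulation of AUC and invented predictions rather than study data:

```python
def auc(y_true, y_prob):
    """AUC = P(score of a random positive > score of a random negative),
    counting ties as 1/2."""
    pos = [p for p, y in zip(y_prob, y_true) if y == 1]
    neg = [p for p, y in zip(y_prob, y_true) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def calibration_in_the_large(y_true, y_prob):
    """Mean predicted risk minus observed event rate; positive values mean
    the model over-predicts risk (the miscalibration reported for HAPRED-I)."""
    return sum(y_prob) / len(y_prob) - sum(y_true) / len(y_true)

# Illustrative outcomes and predicted probabilities, not study data.
y_true = [1, 0, 0, 1, 0, 1, 0, 0]
y_prob = [0.8, 0.4, 0.3, 0.35, 0.5, 0.6, 0.2, 0.4]
print(round(auc(y_true, y_prob), 3))
print(round(calibration_in_the_large(y_true, y_prob), 3))
```

A model can discriminate well yet be poorly calibrated (as HAPRED-I was), which is why both metrics are reported.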

13
Therapist effects in real-world rehabilitation outcomes: a cohort study of the nationwide GLA:D osteoarthritis management program in Denmark

Obasohan, P. E.; Palmer, J.; Alderson, D.; Yu, D.; Gronne, D. T.; Roos, E. M.; Skou, S. T.; Peat, G. M.

2026-04-21 rehabilitation medicine and physical therapy 10.64898/2026.04.20.26351120 medRxiv
Top 1%
0.3%

Objective: Unlike several other fields of healthcare, little is known about the size of therapist effects on patient outcomes following rehabilitation for musculoskeletal conditions. We aimed to estimate the proportion of variance in patient outcomes from a structured rehabilitation program explained by therapist effects. Methods: For our observational cohort study we accessed data from the national multicentre Good Life with osteoArthritis in Denmark (GLA:D) osteoarthritis management program. Analyses included 23,021 consecutive eligible adults with hip or knee osteoarthritis (mean (SD) age 65.0 (9.8) years, 71% female) treated by 657 therapists between October 2014 and February 2019. The primary outcome was a ≥30% reduction in pain intensity on a 0-100 VAS at 3 months. Therapist effects were estimated as the variance partition coefficient (intra-class correlation coefficient (ICC)) from two-level random intercept logistic regression models before and after adjusting for patient-level case-mix factors and therapist-level characteristics (number of patients treated, days since therapist certification). Analyses were repeated for a range of secondary outcomes using multiply imputed data and complete-case analysis. Results: 52% of patients reported a ≥30% reduction in pain intensity on the 0-100 VAS at 3 months. In the null model the ICC was 0.007 (95% CI: 0.005, 0.009), which changed little after adjusting for patient- and therapist-level covariates. Upper confidence limits for ICC estimates across all secondary outcomes in multiply imputed and complete-case analyses were less than 0.03. Conclusions: In a nationally implemented osteoarthritis management program delivered by trained healthcare professionals, therapist effects made a minimal contribution to variation in patient outcomes.
Key messages. What is already known on this topic: Therapist effects, defined as the effect of a given therapist on patient outcomes as compared to another therapist, have been observed in several fields of healthcare and have important consequences for selection, training, and service improvement. In musculoskeletal rehabilitation, five previous studies suggest that 1-12% of variation in patient-reported outcomes may be attributable to therapist effects, but these estimates were based on relatively small datasets, resulting in substantial uncertainty. What this study adds: Our cohort study analysed registry data from 2014-2019 on 23,021 patients and 647 trained therapists from the nationally implemented GLA:D structured osteoarthritis management program in Denmark. We found that therapist effects accounted for less than 3% of total variation in patient-reported pain and quality of life outcomes 3 months after beginning the program. How this study might affect research, practice, or policy: Our findings suggest that contextual factors that relate to therapist effects, such as therapist characteristics or therapist-patient interaction and alliance, make a minimal contribution to variation in patient outcomes from this structured, group-based rehabilitation intervention. Any contextual effects must be attributable to alternative sources, e.g. patient expectations or intervention setting.
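The variance partition coefficient reported above can be computed from the between-therapist variance of a two-level random-intercept logistic model using the latent-response convention, in which the level-1 residual variance is fixed at π²/3. A minimal sketch; the between-therapist variance below is an assumed value chosen to reproduce the null-model ICC of 0.007, not a figure from the study:

```python
import math

def icc_logistic(var_between):
    """Variance partition coefficient (ICC) on the latent-response scale
    for a two-level random-intercept logistic model: the level-1 residual
    variance of the standard logistic distribution is pi^2 / 3."""
    return var_between / (var_between + math.pi ** 2 / 3)

# Assumed between-therapist variance, chosen for illustration.
var_u = 0.0232
print(round(icc_logistic(var_u), 3))
```

On this scale even a seemingly non-trivial random-intercept variance translates into a tiny share of total outcome variation, which is why the reported therapist effect is so small.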

14
The MIND Study: Design, Feasibility, and Baseline Characteristics of a Smartphone-Based Migraine Cohort

Khorsand, B.; Teichrow, D.; Lipton, R. B.; Ezzati, A.

2026-04-21 neurology 10.64898/2026.04.14.26350866 medRxiv
Top 1%
0.3%

Objective: To describe the design, feasibility, and baseline characteristics of the Migraine Impact on Neurocognitive Dynamics (MIND) study, a 30-day smartphone-based cohort for high-frequency assessment of cognition and symptoms in adults with migraine. Background: Cognitive symptoms are an important component of migraine burden, but they are difficult to measure using single-visit testing or retrospective questionnaires. Repeated smartphone-based assessment may better capture real-world variability in cognition and symptoms. Methods: Adults meeting International Classification of Headache Disorders, 3rd edition, criteria for migraine were enrolled remotely and completed 30 days of once-daily ecological momentary assessments and mobile cognitive tasks delivered through the Mobile Monitoring of Cognitive Change platform. Baseline measures assessed demographics, migraine characteristics, disability, mood, stress, and treatment patterns. Feasibility was evaluated using enrollment, completion, and retention metrics. Results: A total of 177 participants enrolled (mean age 38.8 ± 11.9 years; 79.7% female), including 80/177 (45.2%) with chronic migraine. Across the 30-day protocol, 3688 daily assessments were completed, representing 70.8% of all possible study days, and 70.6% of participants completed at least 20 days of monitoring. Completion remained above 60% across study days. At baseline, chronic migraine was associated with greater burden than low-frequency and high-frequency episodic migraine, including higher MIDAS scores (98.6 vs. 38.7 and 70.3), more days with concentration difficulty (16.0 vs. 7.9 and 11.5), and more days with functional interference (18.5 vs. 7.6 and 13.0). Conclusions: The MIND study demonstrates the feasibility of high-frequency smartphone-based assessment of cognition and symptoms in migraine and provides a methodological foundation for future analyses of within-person cognitive and symptom dynamics across the migraine cycle.

15
Prognosis of stroke subtypes in whole population health systems data: a matched cohort study

Hosking, A.; Iveson, M. H.; Sherlock, L.; Mukherjee, M.; Grover, C.; Alex, B.; Parepalli, S.; Mair, G.; Doubal, F.; Whalley, H. C.; Tobin, R.; Wardlaw, J. M.; Al-Shahi Salman, R.; Whiteley, W. N.

2026-04-20 neurology 10.64898/2026.04.17.26351150 medRxiv
Top 1%
0.3%

Background: Outcome after stroke varies according to stroke subtype by location, but healthcare systems data studies do not include subtyping information. We linked natural language processing (NLP) of brain imaging reports to routinely collected data to estimate risk of death and other outcomes after stroke subtypes in a nationwide dataset. Methods: We applied a previously validated NLP algorithm to all CT and MRI head scan reports in Scotland between 2010 and 2018. We linked the reports to hospital readmissions, prescriptions and death data to identify and characterize people with stroke, and to categorize them into deep and cortical ischemic stroke, deep and lobar intracerebral hemorrhage (ICH), subarachnoid hemorrhage, and subdural hemorrhage. We used a matched cohort design and age- and sex-matched four controls per case who never had a stroke. By subtype, we estimated rehospitalization with stroke, myocardial infarction (MI), cancer, dementia, epilepsy and death, accounting for confounders and the competing risk of death. Results: From 785,331 people with a head scan, we identified 64,219 with clinical stroke phenotypes (mean age 73.4 years, 49.5% male), and subtyped 12,616 with deep ischaemic stroke; 14,103 with cortical ischaemic stroke; 1,814 with deep ICH; and 1,456 with lobar ICH. There was a higher absolute rate of 1-year hospital readmission for lobar compared with deep ICH (4.9% [95% CI 3.9%-6.1%] vs 3.4% [2.6%-4.3%]), a higher risk of dementia beyond 6 months after lobar ICH compared to controls than for other stroke subtypes (aHR 3.5 [2.3-5.3]), and a higher risk of MI within 6 months of cortical ischemic stroke than for other stroke subtypes (aHR 4.6 [3.4-6.3]). Conclusions: NLP of free-text reports linked to coded data successfully subtyped stroke at scale, and we estimated the risk of clinically relevant outcomes. Future work should use free text to enable large-scale audit and epidemiology of people with stroke.
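As a toy illustration of classifying free-text scan reports into stroke subtypes: the study used a previously validated NLP algorithm, and the rule patterns below are invented stand-ins, not the study's actual method.

```python
import re

# Invented keyword rules for illustration only; real report language is far
# messier (negation, uncertainty, old vs. new lesions) and needs a proper
# validated NLP pipeline.
PATTERNS = [
    (re.compile(r"subarachnoid", re.I), "subarachnoid hemorrhage"),
    (re.compile(r"subdural", re.I), "subdural hemorrhage"),
    (re.compile(r"lobar .*h(a)?emorrhage", re.I), "lobar ICH"),
    (re.compile(r"(deep|basal ganglia) .*h(a)?emorrhage", re.I), "deep ICH"),
    (re.compile(r"cortical .*infarct", re.I), "cortical ischemic stroke"),
    (re.compile(r"(lacunar|deep) .*infarct", re.I), "deep ischemic stroke"),
]

def subtype(report_text):
    """Return the first matching subtype label, else 'unclassified'."""
    for pattern, label in PATTERNS:
        if pattern.search(report_text):
            return label
    return "unclassified"

print(subtype("Small lacunar infarct in the left internal capsule."))
```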

16
Post-Discharge Anti-Seizure Medication Use Improves Post-Stroke Survival: An Emulated Target Trial in Older Adults

Sankaranarayanan, M.; Donahue, M. A.; Brooks, J. D.; Sun, S.; Newhouse, J. P.; Blacker, D.; Haneuse, S.; Hernandez-Diaz, S.; Moura, L. M. V. R.

2026-04-20 neurology 10.64898/2026.04.17.26351149 medRxiv
Top 1%
0.3%

Objective: Levetiracetam is commonly prescribed for seizure prophylaxis after acute ischemic stroke (AIS) and often continued beyond discharge. While its short-term effectiveness for preventing post-stroke seizures is established, it is unclear whether prolonged use improves survival, particularly in older adults. We estimated the effect of continued levetiracetam use on 90-day mortality among Medicare beneficiaries after AIS. Methods: Using Traditional Medicare claims data (2008-2021), we identified beneficiaries aged ≥66 years hospitalized for AIS who initiated outpatient levetiracetam within 90 days of discharge. After one month of continued post-stroke use of levetiracetam (start of follow-up), we compared 90-day mortality between patients with a new levetiracetam dispensation within a 14-day grace period after the start of follow-up and those without one. We performed cloning, censoring and weighting to address immortal time bias and estimated standardized mortality risks, risk differences, and 95% confidence intervals (CI). Results: Among 3,212 eligible beneficiaries, 1,779 (55.4%) received a new levetiracetam dispensation within the 14-day grace period. Median age was 76 years (IQR 70-83); 57.8% were female. After adjustment for demographics, hospitalization characteristics, timing of initiation, and comorbidities, continued use was associated with lower 90-day mortality than discontinuation (53 vs 62 deaths per 1,000; risk difference -9 per 1,000; 95% CI: -12 to -5). The reduction was observed primarily among patients aged ≥75 years. Significance: Among older Medicare beneficiaries who initiated levetiracetam after AIS, continued outpatient use was associated with modestly lower 90-day mortality, particularly in those aged ≥75 years. These findings suggest potential benefits of levetiracetam continuation beyond the immediate post-stroke period.
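The cloning-and-censoring step of the target-trial emulation described above can be sketched as follows. Field names and the data layout are invented for illustration, and the real analysis additionally applies inverse-probability weighting to compensate for the informative censoring this step introduces.

```python
def clone_and_censor(patients, grace_days=14):
    """Clone each patient into both strategies ('continue', 'discontinue');
    a clone is censored at the end of the grace period if the patient's
    observed refill behaviour deviates from the assigned strategy.
    This removes immortal time by assigning strategy at time zero."""
    rows = []
    for p in patients:
        # Did the patient refill within the grace period? (None = no refill)
        refilled = p["refill_day"] is not None and p["refill_day"] <= grace_days
        for strategy in ("continue", "discontinue"):
            adheres = refilled if strategy == "continue" else not refilled
            rows.append({"id": p["id"], "strategy": strategy,
                         "censored": not adheres})
    return rows

# Illustrative cohort: patient 1 refills on day 10; patient 2 never refills.
cohort = [{"id": 1, "refill_day": 10}, {"id": 2, "refill_day": None}]
for row in clone_and_censor(cohort):
    print(row)
```

Each patient thus contributes follow-up time to both arms until their behaviour reveals which strategy they actually followed.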

17
Harmonising UK primary care prescription records for research: A case study in the UK Biobank

Ytsma, C. R.; Torralbo, A.; Fitzpatrick, N. K.; Pietzner, M.; Louloudis, I.; Nguyen, D.; Ansarey, S.; Denaxas, S.

2026-04-22 health informatics 10.64898/2026.04.21.26351274 medRxiv
Top 1%
0.2%

Objective: The aim of this study was to develop and validate an automated, scalable framework to harmonise fragmented UK primary care prescription records into a research-ready dataset by mapping four diverse medical ontologies to a unified, historically comprehensive reference standard. Materials and Methods: We used raw prescription records for consented participants in the UK Biobank, in which participants are uniquely characterized by multiple data modalities. Primary care data were preprocessed by selecting one drug code if multiple were recorded, cleaning codes to match reference presentations, expanding code granularity based on drug descriptions, and updating outdated codes to a single reference version. Harmonisation entailed mapping British National Formulary (BNF) and Read2 codes to dm+d, the universal NHS standard vocabulary for uniquely identifying and prescribing medicines. Harmonised dm+d records were then homogenised to a single concept granularity, the Virtual Medicinal Product (VMP). We validated our methods by creating medication profiles mapping contemporary drug prescribing patterns in 312 physical and mental health conditions. Results: We preprocessed 57,659,844 records (100%) from 221,868 participants (100%). Of those, 48,950 records were dropped due to lack of a drug code. 7,357,572 records (13%) used multiple ontologies. Most (76%) records were encoded in BNF, and most had the code granularity expanded via the drug description (N=28,034,282; 49%). 41,244,315 records (72%) were harmonised to dm+d, and 99.98% of these were converted to VMP as a homogeneous dataset. Across 312 diseases, we identified 23,352 disease-drug associations with 237 medications (represented as BNF subparagraphs) that survived statistical correction, most of which resembled drug-indication pairs.
Conclusion: Our methodology converts highly fragmented, raw prescription records with inconsistent data quality into a streamlined, enriched dataset at a single reference version and granularity of information. Harmonised prescription records can be readily used by researchers for large-scale analyses.
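A minimal sketch of the harmonisation logic described above, with invented codes and lookup tables standing in for the NHS reference data; the real pipeline resolves millions of records against versioned dm+d mappings.

```python
# Invented lookup tables for illustration only; real mappings come from
# NHS reference data, and the code values below are not real codes.
BNF_TO_DMD = {"0205051R0AAAAAA": "dmd:318248001"}
READ2_TO_DMD = {"bk1..": "dmd:318248001"}
DMD_TO_VMP = {"dmd:318248001": "VMP: Ramipril 2.5mg capsules"}

def harmonise(record):
    """Map a raw prescription record (BNF or Read2) to dm+d, then roll it
    up to the single target granularity (VMP). Returns None for records
    that cannot be mapped, which are dropped as in the study."""
    code, ontology = record["code"], record["ontology"]
    source_map = {"BNF": BNF_TO_DMD, "Read2": READ2_TO_DMD}.get(ontology, {})
    dmd = source_map.get(code)
    if dmd is None:
        return None
    return DMD_TO_VMP.get(dmd)

print(harmonise({"code": "bk1..", "ontology": "Read2"}))
```

The key design point is that both source ontologies funnel through a single dm+d concept, so records that started in different vocabularies become directly comparable at VMP level.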

18
The burden of neurogenic orthostatic hypotension in patients with multiple system atrophy: a real-world study

Kmiecik, M. J.; O'Brien, L.; Szpyhulsky, M.; Iodice, V.; Freeman, R.; Jordan, J.; Biaggioni, I.; Kaufmann, H.; Vickery, R.; Miller, A.; Saunders, E.; Rushton, E.; Valle, L.; Norcliffe-Kaufmann, L.

2026-04-22 neurology 10.64898/2026.04.20.26351214 medRxiv
Top 1%
0.2%

Background: Although neurogenic orthostatic hypotension (nOH) is a common and debilitating feature of multiple system atrophy (MSA), little is known about the burden of symptoms in the real world. Objectives: To design and conduct a cross-sectional community-based research survey targeting patients with MSA, with and without nOH. Methods: We recruited patients with MSA to complete an anonymous online survey covering three core themes: 1) timely diagnosis, 2) nOH pharmacotherapy and refractory symptoms, and 3) confidence in physician knowledge. Responses were grouped by pre-specified diagnostic certainty levels. Relationships between symptoms, function, and pharmacotherapy were assessed using univariate and multivariate methods. Results: We analyzed 259 respondents with a self-reported diagnosis of MSA (age: M=64.38, SD=8.09 years; 44% female). In total, 42% also had a diagnosis of nOH; 40% had symptoms highly suspicious of nOH but no diagnosis; and 21% reported having never had their blood pressure measured in the standing position at a clinical visit. Treatment with a pressor agent was independently associated with the presence of other symptoms of autonomic failure. Each additional nOH symptom reported increased the odds of requiring pharmacotherapy by 18%. Yet, despite anti-hypotensive medication use, 97% of patients reported limitations in their ability to bathe, cook, or arise from a chair or bed, with 76% needing caregiver support for refractory nOH symptoms. Conclusions: This cross-sectional representative sample shows nOH is underrecognized and undertreated in patients with MSA, leading to substantial functional limitations. It is our hope that these findings are leveraged for planning future trials and advocating for better treatments.
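The reported 18% increase in odds per additional nOH symptom compounds multiplicatively. A one-line worked example, assuming (for illustration only) that the per-symptom odds ratio applies uniformly across symptom counts:

```python
def cumulative_or(per_symptom_or, k):
    """Odds ratio for k additional symptoms when each symptom multiplies
    the odds by a fixed per-symptom odds ratio (log-odds are additive)."""
    return per_symptom_or ** k

# Four additional symptoms under the reported per-symptom OR of 1.18:
print(round(cumulative_or(1.18, 4), 2))
```

So a patient reporting four more symptoms than another has nearly double the odds of requiring pharmacotherapy under this model.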

19
International Adaptation of a brief Problem-Solving Skills (the IAPPS trial) training for people in custody with severe mental illness in Poland: an open multicentred, parallel group, feasibility randomised controlled trial.

Perry, A. E.; Zawadzka, M.; Rychlik, J.; Hewitt, C.

2026-04-25 forensic medicine 10.64898/2026.04.24.26351654 medRxiv
Top 1%
0.2%

Objectives: The primary aim of this study was to assess the feasibility of delivering an adapted problem-solving skills (PSS) intervention by quantifying the recruitment, follow-up and completion rates for a brief problem-solving intervention for people with a mental health diagnosis in two Polish prisons. Design: IAPPS is an open, multi-centred, parallel group feasibility randomised controlled trial (RCT). Setting: Two prisons in Poland. Participants: Men in custody aged 18 years and older, having a mental illness and living within the prison therapeutic unit. Interventions: The intervention consisted of an adapted PSS intervention plus care as usual (CAU), or care as usual only. The PSS intervention was delivered in groups of up to five people in 1.5-hour sessions over the course of two weeks. Main outcome measures: Primary outcomes were the rates of recruitment and follow-up, and the feasibility of delivering the intervention. Secondary outcomes included measures of depression, general mental health, and coping strategies. Results: 129 male prisoners were screened and 64 were randomly allocated, with a mean age of 53.5 years (SD 14, range 23-84); 59 (95%) prisoners were of Polish origin. Our recruitment rate was 48%. There was differential follow-up, with those in the intervention group less likely to complete the post-test battery than those who received care as usual. Outcome measures were successfully collected at both time points. Conclusions: We were able to recruit, retain and deliver the intervention within the prison setting; some logistical challenges limited our assessment of intervention engagement. Our data help to demonstrate how the RCT study design can be implemented and delivered within the complex prison environment. Trial registration: ISRCTN 70138247, protocol registered May 2021.

20
Ethnic inequalities in respiratory virus epidemics in England: a mathematical modelling study

Robert, A.; Goodfellow, L.; Pellis, L.; van Leeuwen, E.; Edmunds, W. J.; Quilty, B. J.; van Zandvoort, K.; Eggo, R. M.

2026-04-21 infectious diseases 10.64898/2026.04.18.26350858 medRxiv
Top 2%
0.2%

Background: In England, the burden of respiratory infections varies by ethnicity, contributing to health inequalities, but the role of additional demographic factors remains underexplored. We quantified how differences in social mixing and demographic characteristics between ethnic groups cause inequalities in transmission dynamics. Methods: We analysed the association between ethnicity and the number of contacts of 12,484 participants in the 2024-2025 Reconnect social contact survey, using a negative binomial regression model. We simulated respiratory pathogen epidemics using a compartmental model stratified by age, ethnicity, and contact levels, at a national level and in major cities in England. Findings: After adjusting for demographic variables, participants of Black and Mixed ethnicities had more contacts than those of White ethnicity (rate ratios (RR): 1.18 [95% Credible Interval (CI): 1.11-1.26] and 1.31 [95% CI: 1.14-1.52]). Participants of Asian ethnicity had fewer contacts (RR: 0.85 [95% CI: 0.79-0.91]). In national-level simulations, individuals of White ethnicity had the lowest attack rates due to demographic differences and mixing patterns. Local demographic structures changed simulated dynamics: attack rates in individuals of Black and Mixed ethnicities were approximately double those of White ethnicity in Birmingham, but less than 60% higher in Liverpool. Interpretation: Demographic characteristics and mixing patterns create inequalities in transmission dynamics between ethnicities, while local demographic characteristics and pathogen infectiousness change the expected relative burden. To ensure mitigation strategies are effective and equitable, their evaluation must explicitly account for inequalities arising from local context.
Funding: Medical Research Council, National Institute for Health and Care Research, Wellcome Trust. Research in context. Evidence before this study: We searched PubMed for population-based studies quantifying differences in respiratory infections between ethnic groups, up to 1 April 2026, with no language restrictions. Keywords included: (respiratory pathogens OR influenza OR COVID-19) AND (ethnic* OR race) AND (inequ*) AND (compartmental model OR incidence rate ratio OR hazard ratio). We excluded studies that focused on non-respiratory pathogens (e.g. looking at consequences of COVID-19 on incidence of other pathogens). A population-based cohort study showed that influenza infection risk was higher in South Asian, Black, and Mixed ethnic groups compared to White ethnicity in England. Another population-based cohort study highlighted that during the first wave of COVID-19 in England, the South Asian, Black, and Mixed ethnic groups were more likely to test positive and to be hospitalised than the White ethnic group. Census data in England showed that the distributions of age, household size, household income and employment status differed between ethnic groups, and the recent Reconnect social contact surveys highlighted the impact of each demographic factor on participants' number of contacts. Added value of this study: Our study shows that social contact patterns, mixing, and demographic structure all lead to unequal infection risk between ethnic groups in respiratory pathogen epidemics. Using the largest available social contact survey in England, we show that both the average number of contacts and the proportion of high-contact individuals varied by ethnic group, even after adjusting for participants' demographics. These differences, together with mixing patterns and age structure, led to lower expected incidence among individuals of White ethnicity than in all other ethnic groups in simulated outbreaks.
The level of inequality between ethnic groups changed when we used different values of pathogen transmissibility. Finally, as ethnic composition and population structure differ between cities in England, our results show differences in expected inequalities at a local level. Implications of all the available evidence: Inequalities in infection risk between ethnic groups are context- and pathogen-dependent. They arise from both local population structure and contact patterns. Detailed information on mixing between groups and population structure is needed to accurately measure group-specific infection risk. These findings indicate that public health interventions based only on national-level estimates conceal regional variation in risk and may ultimately increase inequalities. Public health interventions need to be tailored to local contexts to be equitable and effective. Finally, our findings provide a foundation for understanding the progression from infection-risk inequalities to disparities in disease presentation and clinical outcomes.
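Rate ratios from a log-link negative binomial model act multiplicatively on expected contact counts. A small sketch: the baseline of 10 contacts is an assumed illustrative value, and the ethnicity coefficients are back-derived from the RRs reported above rather than taken from the fitted model.

```python
import math

# Assumed baseline mean contacts for the reference (White) group.
INTERCEPT = math.log(10.0)

# Coefficients on the log scale; exp(beta) recovers the reported rate ratio.
BETA = {"White": 0.0,
        "Black": math.log(1.18),
        "Mixed": math.log(1.31),
        "Asian": math.log(0.85)}

def expected_contacts(group):
    """Predicted mean contacts: exponentiate the linear predictor of a
    log-link (e.g. negative binomial) count model."""
    return math.exp(INTERCEPT + BETA[group])

for g in ("White", "Black", "Mixed", "Asian"):
    print(g, round(expected_contacts(g), 1))
```

Because the link is logarithmic, each coefficient scales the expected count by a constant factor regardless of the baseline, which is exactly how the adjusted RRs in the abstract should be read.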